Model Report

Generated on 03 Dec 2024, 21:51   ●   48,842 original samples, 24,421 synthetic samples

Accuracy
  Overall             95.9%    (98.8%)
  Univariate          97.0%    (99.3%)
  Bivariate           94.8%    (98.3%)

Similarity
  Cosine Similarity   0.99977  (0.99998)
  Discriminator AUC   57.9%    (50.5%)

Distances
  Identical Matches   0.0%     (0.0%)
  Average Distances   0.244    (0.244)

Values in parentheses are the corresponding holdout-based reference values.
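The discriminator AUC above reflects how well a binary classifier can distinguish original from synthetic samples: 50% means the two are indistinguishable, 100% means they are trivially separable. As a minimal sketch (not the report's actual implementation; function and variable names are illustrative), such an AUC can be computed from per-sample discriminator scores with the rank-based formulation:

```python
def discriminator_auc(scores_original, scores_synthetic):
    # AUC = probability that a randomly drawn original sample receives a
    # higher "looks original" score than a randomly drawn synthetic one
    # (ties count as 0.5). 0.5 means the discriminator cannot tell the
    # two sets apart; 1.0 means it separates them perfectly.
    wins = sum(
        1.0 if o > s else 0.5 if o == s else 0.0
        for o in scores_original
        for s in scores_synthetic
    )
    return wins / (len(scores_original) * len(scores_synthetic))

print(discriminator_auc([0.9, 0.8], [0.1, 0.2]))  # 1.0: perfectly separable
print(discriminator_auc([0.5, 0.5], [0.5, 0.5]))  # 0.5: indistinguishable
```

An AUC of 57.9% against a 50.5% holdout reference, as reported above, indicates the discriminator gains only a small edge over random guessing.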

Correlations
[correlation plot omitted from this export]

Univariate Distributions
[distribution plots omitted from this export]

Bivariate Distributions
[distribution plots omitted from this export]

Accuracy

Column            Univariate   Bivariate
income                 99.5%       96.5%
capital-gain           99.0%       95.8%
capital-loss           98.9%       95.9%
fnlwgt                 98.8%       95.8%
age                    98.0%       95.3%
occupation             98.0%       94.9%
workclass              97.8%       95.4%
race                   97.6%       95.4%
education-num          97.2%       94.8%
education              97.2%       94.7%
native-country         96.8%       94.8%
marital-status         95.0%       93.7%
hours-per-week         95.0%       93.5%
sex                    93.6%       92.8%
relationship           92.9%       92.2%
Total                  97.0%       94.8%

Explainer
Accuracy of the synthetic data is assessed by comparing the distributions of the synthetic data (shown in green) against those of the original data (shown in gray). For each distribution plot, the deviations are summed across all categories to obtain the total variation distance (TVD), and the accuracy is then defined as 100% - TVD. These accuracies are calculated for all univariate and bivariate distributions, and the final accuracy score is the average across all of them.
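As a minimal sketch of this calculation for a single categorical column (pure Python; names are illustrative, and numeric columns would need to be binned into categories first, which this sketch does not cover):

```python
from collections import Counter

def univariate_accuracy(original, synthetic):
    # Total variation distance (TVD): half the summed absolute differences
    # between the category frequencies of the two columns.
    # Accuracy is then 100% - TVD, expressed here as a fraction.
    p, q = Counter(original), Counter(synthetic)
    tvd = 0.5 * sum(
        abs(p[c] / len(original) - q[c] / len(synthetic))
        for c in set(p) | set(q)
    )
    return 1.0 - tvd

orig = ["<=50K"] * 70 + [">50K"] * 30
synt = ["<=50K"] * 65 + [">50K"] * 35
print(univariate_accuracy(orig, synt))  # ~0.95
```

The bivariate accuracies are computed the same way, over the joint frequencies of column pairs instead of single-column frequencies.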

Similarity


Explainer
These plots show the first three principal components of the training samples, synthetic samples, and (if available) holdout samples within the embedding space. The black dots visualize the centroids of the respective sample sets. The similarity metric then measures the cosine similarity between these centroids. We expect the cosine similarity to be close to 1, indicating that the synthetic samples resemble the training samples just as much as the holdout samples do.
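A minimal sketch of this centroid-based cosine similarity (using numpy; the PCA projection is only needed for the visualization, not for the metric itself, and all names here are illustrative):

```python
import numpy as np

def centroid_cosine_similarity(a, b):
    # Centroids: per-dimension means over each set of embedding vectors
    # (rows = samples, columns = embedding dimensions).
    ca, cb = a.mean(axis=0), b.mean(axis=0)
    return float(ca @ cb / (np.linalg.norm(ca) * np.linalg.norm(cb)))

train = np.array([[1.0, 0.0], [0.0, 1.0]])  # centroid (0.5, 0.5)
synth = np.array([[2.0, 0.0], [0.0, 2.0]])  # centroid (1.0, 1.0)
print(centroid_cosine_similarity(train, synth))  # ~1.0: same direction
```

Because only the directions of the centroids matter, the metric is insensitive to the overall scale of the embeddings.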

Distances

Synthetic vs. Training Data (Synthetic vs. Holdout Data)
Identical Matches 0.0% (0.0%)
Average Distances 0.244 (0.244)


Explainer
Synthetic data should be as close to the original training samples as it is to the original holdout samples, which serve as a reference. This can be assessed empirically by measuring the distance of each synthetic sample to its closest original sample, with the training and holdout sets subsampled to equal size. In the visualization above, the distances of synthetic samples to the training samples are displayed in green, and the distances of synthetic samples to the holdout samples (if available) in gray. A green line significantly to the left of the gray line implies that the synthetic samples are closer to the training samples than to the holdout samples, indicating that the model has overfitted to the training data. A green line that overlays the gray line validates that the trained model captures general patterns, which can be found in the training samples just as well as in the holdout samples.
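A sketch of the underlying distance computation on numeric sample vectors (brute-force nearest neighbour with numpy; names are illustrative, and the report's actual distance metric and preprocessing may differ):

```python
import numpy as np

def nearest_distances(synthetic, reference):
    # Euclidean distance from each synthetic sample to its closest
    # sample in the reference set (training or holdout).
    diffs = synthetic[:, None, :] - reference[None, :, :]
    return np.linalg.norm(diffs, axis=2).min(axis=1)

synth = np.array([[0.0, 0.0], [1.0, 1.0]])
train = np.array([[0.0, 0.0], [2.0, 2.0]])
hold  = np.array([[0.5, 0.5], [3.0, 3.0]])

d_train = nearest_distances(synth, train)
d_hold  = nearest_distances(synth, hold)
print((d_train == 0).mean())        # share of identical matches: 0.5
print(d_train.mean(), d_hold.mean())  # average distances per reference set
```

The "Identical Matches" figure above is the share of synthetic samples with a nearest-neighbour distance of exactly zero, and "Average Distances" is the mean of these nearest-neighbour distances over all synthetic samples.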